
    Practical acquisition and rendering of diffraction effects in surface reflectance

    We propose two novel contributions for measurement-based rendering of diffraction effects in the surface reflectance of planar homogeneous diffractive materials. First, as a general solution for commonly manufactured materials, we propose a practical data-driven rendering technique and a measurement approach to efficiently render complex diffraction effects in real time. Our measurement step simply involves photographing a planar diffractive sample illuminated with an LED flash; we directly record the resultant diffraction pattern on the sample surface due to narrow-band point source illumination. We further propose an efficient rendering method that exploits the measurement in conjunction with the Huygens-Fresnel principle to fit relevant diffraction parameters based on a first-order approximation. Our data-driven rendering method requires the precomputation of a single diffraction look-up table for accurate spectral rendering of complex diffraction effects. Second, for sharp specular samples, we propose a novel method for practical measurement of the underlying diffraction grating using out-of-focus “bokeh” photography of the specular highlight. We demonstrate how the measured bokeh can be employed as a height field to drive a diffraction shader based on a first-order approximation for efficient real-time rendering. Finally, we also derive analytic solutions for a few special cases of diffraction from our measurements and demonstrate realistic rendering results under complex light sources and environments.
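    The abstract does not give the fitting details, but for a simple one-dimensional grating the first-order approximation it mentions reduces to the classical grating equation, sin(theta_v) = sin(theta_i) + m * lambda / d. A minimal sketch of solving this for the wavelength seen in a given view direction (function and parameter names are ours, not the paper's):

        import numpy as np

        def diffracted_wavelength(theta_i, theta_v, period_nm, order=1):
            """Grating equation: sin(theta_v) = sin(theta_i) + m * lambda / d.
            Returns the wavelength (nm) a grating of the given period (nm)
            sends from incident angle theta_i toward viewing angle theta_v
            at the given diffraction order."""
            return period_nm * (np.sin(theta_v) - np.sin(theta_i)) / order

        # Example: a 1000 nm grating, normal incidence, viewed at 30 degrees
        lam = diffracted_wavelength(0.0, np.radians(30.0), 1000.0)
        # lam is about 500 nm, i.e. green light appears at first order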

    Image-based relighting using room lighting basis

    We present a novel and practical approach for image-based relighting that employs the lights available in a regular room to acquire the reflectance field of an object. The lighting basis includes diverse light sources such as the house lights and the natural illumination coming from the windows. Once the data is captured, we homogenize the reflectance field to account for the variety of light source colours and minimise tone differences across the reflectance field. Additionally, we measure the room dark level, a small amount of global illumination present with all lights switched off and blinds drawn, caused by light leakage through the blinds. The dark level is removed from the individual local lighting basis conditions and employed as an additional global lighting basis. Finally, we optimize the projection of a desired lighting environment onto our room lighting basis to obtain a close approximation of the environment with our sparse lighting basis. We achieve plausible results for diffuse and glossy objects that are qualitatively similar to results produced with dense sampling of the reflectance field, such as with a light stage, and we demonstrate effective relighting results in two different room configurations. We believe our approach can be applied to practical relighting applications with general studio lighting.
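    The abstract leaves the projection step unspecified; a non-negative least-squares fit of the basis weights is one plausible realization, since light contributions cannot be negative. A minimal sketch under that assumption (all names hypothetical):

        import numpy as np
        from scipy.optimize import nnls

        def fit_basis_weights(env_samples, basis_responses):
            """Project a target lighting environment onto a sparse room basis.
            env_samples:     (M,) radiance of the desired environment sampled
                             over M directions
            basis_responses: (M, N) radiance each of the N room lights
                             contributes at those same M directions
            Returns one non-negative weight per room light."""
            weights, _residual = nnls(basis_responses, env_samples)
            return weights

        def relight(reflectance_field, weights):
            """Relit image as a weighted sum of per-light basis images.
            reflectance_field: (N, H, W, 3), one image per basis condition."""
            return np.tensordot(weights, reflectance_field, axes=1)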

    SEWA DB: A rich database for audio-visual emotion and sentiment research in the wild

    Natural human-computer interaction and audio-visual human behaviour sensing systems that achieve robust performance in the wild are needed more than ever, as digital devices become an indispensable part of our lives. Accurately annotated real-world data are the crux of devising such systems. However, existing databases usually consider controlled settings, low demographic variability, and a single task. In this paper, we introduce the SEWA database of more than 2000 minutes of audio-visual data of 398 people from six cultures, 50% female, uniformly spanning the age range of 18 to 65 years. Subjects were recorded in two different contexts: while watching adverts and while discussing the adverts in a video chat. The database includes rich annotations of the recordings in terms of facial landmarks, facial action units (FAUs), various vocalisations, mirroring, continuously valued valence, arousal, liking, and agreement, and prototypic examples of (dis)liking. This database aims to be an extremely valuable resource for researchers in affective computing and automatic human sensing, and is expected to push forward research in human behaviour analysis, including cultural studies. Along with the database, we provide extensive baseline experiments for automatic FAU detection and automatic valence, arousal, and (dis)liking intensity estimation.
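    The abstract does not name the evaluation metrics, but continuous valence/arousal baselines in affective computing are customarily scored with the concordance correlation coefficient (CCC); our assumption here, shown only as a reference sketch:

        import numpy as np

        def concordance_correlation(pred, gold):
            """Concordance correlation coefficient (CCC), a standard agreement
            measure between a predicted and an annotated continuous trace."""
            pred = np.asarray(pred, dtype=float)
            gold = np.asarray(gold, dtype=float)
            mp, mg = pred.mean(), gold.mean()
            vp, vg = pred.var(), gold.var()
            cov = ((pred - mp) * (gold - mg)).mean()
            return 2.0 * cov / (vp + vg + (mp - mg) ** 2)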

    ShaderLab Framework

    ShaderLab is a teaching tool for solidifying the fundamentals of Computer Graphics. The framework is based on Qt5, CMake, OpenGL 4.0, and GLSL, and allows students to modify GLSL shaders in an IDE-like environment. It can render shaded polyhedral geometry (.off/.obj), supports image-based post-processing, and allows students to implement simple ray-tracing algorithms. The tool will be intensively tested by 140 CO317 Computer Graphics students in Spring 2017.

    Accessible GLSL Shader programming

    Teaching the fundamental principles of Computer Graphics requires a thoroughly prepared lecture alongside practical training. Modern graphics programming rarely provides a straightforward application programming interface (API), and the available APIs pose high entry barriers to students. Shader-based programming of standard graphics pipelines is often made inaccessible by complex setup procedures and convoluted programming environments. In this paper we discuss an undergraduate entry-level lecture with its accompanying lab exercises. We present a programming framework that makes interactive graphics programming accessible while allowing individual tasks to be designed as instructive exercises that solidify the content of individual lecture units. The teaching framework provides a well-defined programmable graphics pipeline with geometry shading stages and image-based post-processing functionality based on framebuffer objects. It is open source and available online.

    Acquiring spatially varying appearance of printed holographic surfaces

    We present two novel and complementary approaches to measure diffraction effects in commonly found planar spatially varying holographic surfaces. Such surfaces are increasingly found in decorative materials such as gift bags, holographic papers, clothing, and security holograms, and produce impressive visual effects that have not previously been acquired for realistic rendering. These holographic surfaces are usually manufactured with one-dimensional diffraction gratings that vary in periodicity and orientation over the entire sample in order to produce a wide range of diffraction effects such as gradients and kinematic (rotational) effects. Our proposed methods estimate these two parameters and allow an accurate reproduction of these effects in real time. The first method simply uses a point light source to recover both the grating periodicity and orientation in the case of regular and stochastic textures. Under the assumption that the sample is made of the same repeated diffractive tile, good results can be obtained using just one to five photographs on a wide range of samples. The second method is based on polarization imaging and enables an independent high-resolution measurement of the grating orientation and relative periodicity at each surface point. The method requires a minimum of four photographs for accurate results, does not assume repetition of an exemplar tile, and can even reveal minor fabrication defects. We present point light source renderings with both approaches that qualitatively match photographs, as well as real-time renderings under complex environmental illumination.
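    The abstract does not detail how the four photographs are combined; the standard four-angle Stokes-parameter recovery below illustrates how a per-pixel polarization angle, which relates to the local grating orientation, can be obtained from a minimum of four photographs. The exact mapping to grating orientation and all names are our assumptions:

        import numpy as np

        def orientation_from_polarization(i0, i45, i90, i135):
            """Per-pixel angle of linear polarization from four photographs
            taken through a linear polarizer at 0, 45, 90 and 135 degrees.
            Inputs are (H, W) intensity images; output is in radians."""
            s1 = i0.astype(float) - i90.astype(float)    # Stokes S1
            s2 = i45.astype(float) - i135.astype(float)  # Stokes S2
            return 0.5 * np.arctan2(s2, s1)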

    Factorized higher-order CNNs with an application to spatio-temporal emotion estimation

    Training deep neural networks with spatio-temporal (i.e., 3D) or higher-order multidimensional convolutions is computationally challenging due to millions of unknown parameters across dozens of layers. To alleviate this, one approach is to apply low-rank tensor decompositions to convolution kernels in order to compress the network and reduce its number of parameters. Alternatively, new convolutional blocks, such as those of MobileNet, can be directly designed for efficiency. In this paper, we unify these two approaches by proposing a tensor factorization framework for efficient higher-order (separable) multidimensional convolutions. Interestingly, the proposed framework enables a novel higher-order transduction: a network can be trained on a given domain (e.g., 2D images, or N-dimensional data in general) and then generalized via transduction to higher-order data such as videos (or (N+K)-dimensional data in general), capturing, for instance, temporal dynamics while preserving the learnt spatial information. We apply the proposed methodology, coined CP-Higher-Order Convolution (HO-CPConv), to spatio-temporal facial emotion analysis. Most existing facial affect models focus on static imagery and discard all temporal information; this is due to the above-mentioned burden of training 3D convolutional nets and the lack of large bodies of video data annotated by experts. We address both issues with our proposed framework: initial training is first done on static imagery before using transduction to generalize to the temporal domain. We demonstrate superior performance on three challenging large-scale affect estimation datasets: AffectNet, SEWA, and AFEW-VA.
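    The abstract does not spell out the layer structure; the sketch below shows one standard way a CP-factorized 3D convolution can be assembled from pointwise projections and per-mode depthwise 1D convolutions. This is our own PyTorch sketch of the general technique, not the paper's exact layer:

        import torch
        import torch.nn as nn

        class CPConv3d(nn.Module):
            """CP-factorized 3D convolution: a full (k x k x k) kernel is
            replaced by a 1x1x1 projection to rank R, three depthwise 1D
            convolutions (one per mode: time, height, width), and a 1x1x1
            projection back to the output channels."""
            def __init__(self, in_ch, out_ch, k=3, rank=16):
                super().__init__()
                p = k // 2
                self.proj_in = nn.Conv3d(in_ch, rank, kernel_size=1)
                self.conv_t = nn.Conv3d(rank, rank, (k, 1, 1),
                                        padding=(p, 0, 0), groups=rank)
                self.conv_h = nn.Conv3d(rank, rank, (1, k, 1),
                                        padding=(0, p, 0), groups=rank)
                self.conv_w = nn.Conv3d(rank, rank, (1, 1, k),
                                        padding=(0, 0, p), groups=rank)
                self.proj_out = nn.Conv3d(rank, out_ch, kernel_size=1)

            def forward(self, x):  # x: (batch, channels, time, height, width)
                x = self.proj_in(x)
                x = self.conv_w(self.conv_h(self.conv_t(x)))
                return self.proj_out(x)

    For the same receptive field, the parameter count drops from roughly in_ch * out_ch * k^3 to roughly rank * (in_ch + out_ch + 3k), which is what makes training on video-sized inputs tractable.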